
    A Backward-traversal-based Approach for Symbolic Model Checking of Uniform Strategies for Constrained Reachability

    Since the introduction of Alternating-time Temporal Logic (ATL), many logics have been proposed to reason about the different strategic capabilities of the agents of a system. In particular, some logics have been designed to reason about the uniform memoryless strategies of such agents. These are the strategies the agents can effectively play by looking only at what they observe from the current state. ATL_ir can be seen as the core logic for reasoning about such uniform strategies. Nevertheless, its model-checking problem is difficult (it requires a polynomial number of calls to an NP oracle), and practical algorithms to solve it have appeared only recently. This paper proposes a technique for model checking uniform memoryless strategies. Existing techniques build the strategies from the states of interest, such as the initial states, through a forward traversal of the system. The proposed approach instead builds the winning strategies from the target states through a backward traversal, making sure that only uniform strategies are explored. However, building the strategies from the ground up limits its applicability to constrained reachability objectives only. This paper describes the approach in detail and compares it experimentally with existing approaches implemented in a BDD-based framework. These experiments show that the technique is competitive on the cases it can handle. (In Proceedings GandALF 2017, arXiv:1709.0176)
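    To make the backward construction concrete, the following explicit-state sketch (an illustration only: sets of states stand in for the paper's BDDs, and all names are assumptions) builds a strategy for a constrained-reachability objective E[p U q] from the target states backward. Uniformity is enforced by attaching actions to observations rather than to individual states, so states sharing an observation are forced to agree.

        def backward_uniform_reach(states, transitions, obs, actions, p, q):
            """Build a uniform strategy for E[p U q] backward from q.

            transitions: dict (state, action) -> set of successor states
            obs:         dict state -> observation; states with the same
                         observation must be assigned the same action
            p, q:        sets of states (constraint and target)
            Returns (strategy, winning) where strategy maps observations
            to actions and winning is the set of covered states.
            """
            strategy = {}            # observation -> chosen action
            winning = set(q)         # target states win trivially
            changed = True
            while changed:
                changed = False
                for s in states - winning:
                    if s not in p:
                        continue     # constrained reachability: stay in p
                    o = obs[s]
                    # uniform: if an action is already fixed for this
                    # observation, it is the only candidate
                    candidates = [strategy[o]] if o in strategy else list(actions)
                    for a in candidates:
                        succ = transitions.get((s, a), set())
                        if succ and succ <= winning:
                            # greedy simplification: the first action that
                            # works is kept; the real algorithm explores
                            # alternatives symbolically
                            strategy[o] = a
                            winning.add(s)
                            changed = True
                            break
            return strategy, winning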

    Verifying Diagnostic Software

    Livingstone PathFinder (LPF) is a simulation-based computer program for verifying autonomous diagnostic software. LPF is designed especially to be applied to NASA's Livingstone computer program, which implements a qualitative-model-based algorithm that diagnoses faults in a complex automated system (e.g., an exploratory robot, spacecraft, or aircraft). LPF forms a software test bed containing a Livingstone diagnosis engine embedded in a simulated operating environment, consisting of a simulator of the system to be diagnosed by Livingstone and a driver program that issues commands and faults according to a nondeterministic scenario provided by the user. LPF runs the test bed through all executions allowed by the scenario, checking for various selectable error conditions after each step. All components of the test bed are instrumented, so that execution can be single-stepped both backward and forward. The architecture of LPF is modular and includes generic interfaces to facilitate the substitution of alternative versions of its different parts. Altogether, LPF provides a flexible, extensible framework for simulation-based analysis of diagnostic software; these characteristics also make it amenable to diagnostic programs other than Livingstone.
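    The core exploration loop of such a test bed can be pictured as follows. This is a minimal sketch, not LPF's actual code: DiagnosisEngine, Simulator, and the scenario format are assumptions. Snapshot/restore on the instrumented components is what makes single-stepping backward as well as forward possible.

        def explore(engine, simulator, scenario_steps, check, prefix=()):
            """Depth-first exploration of all scenario branches.

            scenario_steps: list of lists; each inner list holds the
                            commands or fault injections allowed at that
                            step (the scenario's nondeterminism).
            check:          function(engine, simulator) -> error or None.
            Yields (trace, error) pairs for every violation found.
            """
            if not scenario_steps:
                return
            first, rest = scenario_steps[0], scenario_steps[1:]
            for event in first:                  # branch on each allowed event
                engine_snap = engine.snapshot()  # instrumented components can
                sim_snap = simulator.snapshot()  # be rolled back (step backward)
                simulator.apply(event)
                engine.observe(simulator.observables())
                error = check(engine, simulator)
                if error is not None:
                    yield (prefix + (event,), error)
                yield from explore(engine, simulator, rest, check,
                                   prefix + (event,))
                engine.restore(engine_snap)      # step backward before trying
                simulator.restore(sim_snap)      # the next branch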

    State Event Models for the Formal Analysis of Human-Machine Interactions

    The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well designed if they can be described by relatively simple full-control mental models for their human operators. For this reason, our framework supports the automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed was developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow an adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable the application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.
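    For illustration, here is a minimal sketch of a full-control check over two LTSs. The convention used, that the mental model must enable exactly the system's commands and at least its observations at every reachable synchronized pair, paraphrases the framework informally; the representation and names are assumptions.

        from collections import namedtuple

        # transitions: dict (state, label) -> set of successor states
        LTS = namedtuple("LTS", "initial transitions")

        def enabled(lts, state, labels):
            """Labels from `labels` enabled in `state`."""
            return {l for (s, l) in lts.transitions if s == state and l in labels}

        def full_control(system, mental, commands, observations):
            """Check full-control by exploring synchronized state pairs."""
            seen, stack = set(), [(system.initial, mental.initial)]
            while stack:
                s, m = stack.pop()
                if (s, m) in seen:
                    continue
                seen.add((s, m))
                # the mental model must enable exactly the system's commands...
                if enabled(mental, m, commands) != enabled(system, s, commands):
                    return False
                # ...and at least the system's observations
                if not enabled(system, s, observations) <= enabled(mental, m, observations):
                    return False
                for label in enabled(system, s, commands | observations):
                    for s2 in system.transitions[(s, label)]:
                        for m2 in mental.transitions[(m, label)]:
                            stack.append((s2, m2))
            return True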

    Learning Safe Interactions and Full-Control

    This chapter is concerned with the problem of learning how to interact safely with complex automated systems. With large systems, human-machine interaction errors like automation surprises are more likely to happen. Full-control mental models are formal system abstractions embedding the information required to completely control a system and avoid interaction surprises. They represent the internal system understanding that a perfect operator should achieve. However, this concept provides no information about how operators should reach that level of competence. This work investigates the problem of splitting the teaching of full-control mental models into smaller, independent learning units. Each unit allows the operator to control a subset of the system, and units can be learned incrementally to control more and more of its features. This chapter explains how to formalize the learning process based on an operator that merges mental models. On that basis, we show how to generate a set of learning units with the required properties.
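    A minimal sketch of one possible merge operator, assuming mental models represented as deterministic transition maps; the chapter's formal operator may differ:

        def merge(unit_a, unit_b):
            """Merge two learning units given as dicts
            (state, action) -> successor state. Plain union of transitions,
            rejecting units that disagree: one natural well-formedness
            condition for composing learning units."""
            merged = dict(unit_a)
            for key, target in unit_b.items():
                if key in merged and merged[key] != target:
                    raise ValueError(f"units disagree on {key}")
                merged[key] = target
            return merged

        # Units are then applied incrementally, e.g. (names hypothetical):
        # model = merge(merge(basic_flight, autopilot_unit), alarms_unit)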

    Model-based verification of a security protocol for conditional access to services

    We use the formal language LOTOS to specify and verify the robustness of the Equicrypt protocol, under design in the European OKAPI project for conditional access to multimedia services. We state some desired security properties and formalize them. We describe a generic intruder process and its modelling, and show that some properties are falsified in the presence of this intruder. The diagnostic sequences can be used almost directly to exhibit scenarios of possible attacks on the protocol. Finally, we propose an improvement of the protocol which satisfies our properties.
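    As an illustration of the generic-intruder idea (a sketch under assumptions, not the LOTOS specification), the intruder can be modelled as a process that remembers every message it overhears and may inject anything it can build from that knowledge:

        def intruder_moves(knowledge, observed):
            """One step of a generic intruder.

            knowledge: set of messages the intruder has accumulated.
            observed:  messages exchanged on the medium this step.
            Returns the updated knowledge and the set of messages the
            intruder may inject next (replays and simple forgeries).
            """
            knowledge = knowledge | set(observed)   # learn everything seen
            injectable = set(knowledge)             # replay known messages
            for m in knowledge:                     # forge by re-pairing parts
                for n in knowledge:
                    injectable.add(("pair", m, n))
            return knowledge, injectable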

    Comparing approaches for model-checking strategies under imperfect information and fairness constraints

    Starting from Alternating-time Temporal Logic, many logics for reasoning about strategies in a system of agents have been proposed. Some of them consider the strategies that agents can play when they have partial information about the state of the system. ATLKirF is such a logic for reasoning about uniform strategies under unconditional fairness constraints. While this kind of logic has been extensively studied, practical approaches for solving its model-checking problem appeared only recently. This paper considers three approaches for model checking strategies under partial observability of the agents, applied to ATLKirF. These three approaches have been implemented in PyNuSMV, a Python library based on the state-of-the-art model checker NuSMV. Thanks to the experimental results obtained with this library and to the comparison of the relative performance of the approaches, this paper provides indications and guidelines for the use of these verification techniques, showing that different approaches are needed in different situations.
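    A comparison of this kind boils down to running each implementation on the same model and formula while recording verdicts and times. The harness below is a sketch; the callables passed in are placeholders for the three approaches, not PyNuSMV's actual API.

        import time

        def compare(approaches, model, formula):
            """Run each approach on (model, formula); record verdict and time.

            approaches: dict name -> callable(model, formula) -> verdict,
                        e.g. {"partial": ..., "early": ..., "symbolic": ...}
                        (names hypothetical).
            """
            results = {}
            for name, check in approaches.items():
                start = time.perf_counter()
                try:
                    verdict = check(model, formula)
                    results[name] = (verdict, time.perf_counter() - start)
                except MemoryError:
                    results[name] = ("out of memory", None)
            return results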

    Rich Counter-Examples for Temporal-Epistemic Logic Model Checking

    Model checking verifies that a model of a system satisfies a given property, and otherwise produces a counter-example explaining the violation. The verified properties are formally expressed in temporal logics. Some temporal logics, such as CTL, are branching: they express facts about the whole computation tree of the model, rather than about each single linear computation. This branching aspect is even more critical when dealing with multi-modal logics, i.e. logics expressing facts about systems with several transition relations. A prominent example is CTLK, a logic that reasons about temporal and epistemic properties of multi-agent systems. In general, model checkers produce linear counter-examples for failed properties, composed of a single computation path of the model. But some branching properties are only poorly and partially explained by a linear counter-example. This paper proposes richer counter-example structures called tree-like annotated counter-examples (TLACEs), for properties in Action-Restricted CTL (ARCTL), an extension of CTL quantifying over paths restricted in terms of the actions labeling transitions of the model. These counter-examples have a branching structure that supports a more complete description of property violations. Elements of these counter-examples are annotated with parts of the property to give a better understanding of their structure. Visualization and browsing of these richer counter-examples become a critical issue, as the number of branches and states can grow exponentially for deeply-nested properties. This paper formally defines the structure of TLACEs, characterizes adequate counter-examples w.r.t. models and failed properties, and gives a generation algorithm for ARCTL properties. It also illustrates the approach with examples in CTLK, using a reduction of CTLK to ARCTL. The proposed approach has been implemented, first by extending the NuSMV model checker to generate and export branching counter-examples, and second by providing an interactive graphical interface to visualize and browse them. (In Proceedings IWIGP 2012, arXiv:1202.422)
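    One possible rendering of the TLACE structure as a data type (an assumption for illustration, not the paper's exact definition): each node is a model state annotated with the sub-formulas it explains, and it carries one labelled branch per path quantifier that needs its own witness path.

        from dataclasses import dataclass, field

        @dataclass
        class TlaceBranch:
            formula: str        # the path formula this branch explains
            path: list = field(default_factory=list)   # successive TlaceNodes

        @dataclass
        class TlaceNode:
            state: dict         # valuation of the model variables
            annotations: list = field(default_factory=list)  # sub-formulas
            branches: list = field(default_factory=list)     # TlaceBranch list

    Since each nested path quantifier can spawn its own branch, the number of nodes can grow exponentially with nesting depth, which is why interactive visualization matters.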

    Reasoning about strategies under partial observability and fairness constraints

    A number of extensions exist for Alternating-time Temporal Logic; some of these mix strategies and partial observability but, to the best of our knowledge, no work provides a unified framework for strategies, partial observability and fairness constraints. In this paper we propose ATLK^F_po, a logic mixing strategies under partial observability and epistemic properties of agents in a system with fairness constraints on states, and we provide a model-checking algorithm for it.

    Reasoning about memoryless strategies under partial observability and unconditional fairness constraints

    Alternating-time Temporal Logic is a logic for reasoning about the strategies that agents can adopt to achieve a specified collective goal. A number of extensions of this logic exist; some of them combine strategies and partial observability, others include fairness constraints, but to the best of our knowledge no work provides a unified framework for strategies, partial observability and fairness constraints. Integrating these three concepts is important when reasoning about the capabilities of agents that lack full knowledge of a system, for instance when the agents can assume that the environment behaves in a fair way. We present ATLKirF, a logic combining strategies under partial observability in a system with fairness constraints on states. We introduce a model-checking algorithm for ATLKirF by extending the algorithm for a full-observability variant of the logic, and we investigate its complexity. We validate our proposal with an experimental evaluation.
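    The fairness side of such algorithms builds on the classical fair-states computation, sketched below over explicit state sets (the paper's algorithm is symbolic and additionally handles strategies and knowledge, which this sketch omits): the greatest fixpoint nu Z . AND_i EX E[Z U (Z & f_i)] yields the states from which some path visits every fairness set infinitely often.

        def pre(transitions, target):
            """EX target: states with at least one transition into target.
            transitions is an iterable of (source, successor) pairs."""
            return {s for (s, t) in transitions if t in target}

        def reach_within(transitions, target, allowed):
            """E[allowed U target]: backward reachability inside `allowed`."""
            result = set(target)
            frontier = set(target)
            while frontier:
                frontier = (pre(transitions, frontier) & allowed) - result
                result |= frontier
            return result

        def fair_states(states, transitions, fairness):
            """nu Z . AND_i EX E[Z U (Z & f_i)] over the fairness sets."""
            z = set(states)
            while True:
                new_z = set(states)
                for f in fairness:
                    new_z &= pre(transitions, reach_within(transitions, z & f, z))
                if new_z == z:
                    return z
                z = new_z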